
Conversation

tangledbytes
Member

@tangledbytes tangledbytes commented Sep 15, 2025

Explain the Changes

This PR adds support for the GLACIER_DA storage class and for a forced GLACIER storage class. With the changes in this PR, if NSFS_GLACIER_FORCE_STORAGE_CLASS = true, then whenever no storage class is provided OR the STANDARD storage class is provided, the object is treated as a GLACIER storage class object. The following combinations are expected and supported:

  1. NSFS_GLACIER_FORCE_STORAGE_CLASS = false; DENY_UPLOAD_TO_STORAGE_CLASS_STANDARD = false - No storage class or the STANDARD storage class would work.
  2. NSFS_GLACIER_FORCE_STORAGE_CLASS = false; DENY_UPLOAD_TO_STORAGE_CLASS_STANDARD = true - Neither omitting the storage class nor specifying STANDARD would work.
  3. NSFS_GLACIER_FORCE_STORAGE_CLASS = true; DENY_UPLOAD_TO_STORAGE_CLASS_STANDARD = true - No storage class or the STANDARD storage class would work; however, the storage class would be implicitly changed to GLACIER.
  4. NSFS_GLACIER_FORCE_STORAGE_CLASS = true; DENY_UPLOAD_TO_STORAGE_CLASS_STANDARD = false - Same as above.

The above tries to achieve the ask, which was:

So, just to summarize,

  • if customer specified Glacier FR, Glacier FR is set in Deep Archive (as of today)
  • Glacier DA -> Glacier DA (we are rejecting today)
  • Standard -> Glacier FR
    • For a specific customer, a special setting will be enabled so that Standard -> Standard
  • (nothing specified) -> Glacier FR
    • For a specific customer, a special setting will be enabled so that the unspecified case -> Standard
  • any other classes -> error with InvalidStorageClass
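
As a rough, illustrative sketch only (not code from this PR), the effective behavior of the two flags could look like this; the function name and the thrown error are hypothetical, and `requested` mirrors the x-amz-storage-class header value:

function resolve_storage_class(requested, config) {
    const effective = requested || 'STANDARD';
    // Combinations 3 and 4: forcing wins, STANDARD (or no class) is stored as GLACIER.
    if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS && effective === 'STANDARD') {
        return 'GLACIER';
    }
    // Combination 2: without forcing, the deny flag rejects STANDARD (and unspecified) uploads.
    if (config.DENY_UPLOAD_TO_STORAGE_CLASS_STANDARD && effective === 'STANDARD') {
        throw new Error('InvalidStorageClass');
    }
    // Combination 1: STANDARD (or no class) passes through unchanged.
    return effective;
}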

Issues: Fixed #xxx / Gap #xxx

#9184

Testing Instructions:

  • Doc added/updated
  • Tests added

Summary by CodeRabbit

  • New Features

    • Added support for a new GLACIER_DA storage class.
    • NamespaceFS now recognizes multiple Glacier classes (GLACIER, GLACIER_IR, GLACIER_DA).
  • Configuration

    • New optional setting to force Glacier storage class when none or STANDARD is requested (opt-in, default: off).
  • Improvements

    • Storage-class normalization/override now happens earlier in request handling for more consistent behavior.


coderabbitai bot commented Sep 15, 2025

Walkthrough

Adds a config flag to optionally force GLACIER storage class, introduces GLACIER_DA storage class and parsing, updates NamespaceFS to treat multiple Glacier classes uniformly, and injects a request-time override in the S3 REST flow to rewrite STANDARD to GLACIER when configured.

Changes

  • Config flag (config.js): Adds exported boolean NSFS_GLACIER_FORCE_STORAGE_CLASS (default false) with JSDoc stating it forces GLACIER if no storage class is provided or if STANDARD is provided. (A minimal sketch follows this list.)
  • S3 utils: storage class handling (src/endpoint/s3/s3_utils.js): Adds the STORAGE_CLASS_GLACIER_DA constant and export; extends parse_storage_class to recognize it; adds and exports override_storage_class(req), which rewrites x-amz-storage-class to GLACIER when config.NSFS_GLACIER_FORCE_STORAGE_CLASS is enabled and the resolved class is STANDARD.
  • NamespaceFS Glacier recognition (src/sdk/namespace_fs.js): Introduces glacier_storage_classes including GLACIER, GLACIER_DA, and GLACIER_IR and uses a membership check (includes) so any of them is treated as Glacier in NSFS logic.
  • Type declarations (src/sdk/nb.d.ts): Extends the StorageClass union to include 'GLACIER_DA'.
  • S3 REST integration (src/endpoint/s3/s3_rest.js): Calls s3_utils.override_storage_class(req) immediately after header validation, before further request enrichment/redirect handling.
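
A minimal sketch of what the config.js addition might look like, based on the summary above (the exact JSDoc wording and placement in the PR may differ):

/**
 * NSFS_GLACIER_FORCE_STORAGE_CLASS, when true, forces the GLACIER storage class
 * if no storage class is provided or if STANDARD is provided.
 */
config.NSFS_GLACIER_FORCE_STORAGE_CLASS = false;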

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant C as Client
  participant R as S3 REST Handler
  participant U as s3_utils
  participant N as NamespaceFS / Downstream

  C->>R: PUT/POST with headers (x-amz-storage-class)
  Note over R: After header validation
  R->>U: override_storage_class(req)
  alt NSFS_GLACIER_FORCE_STORAGE_CLASS = true
    U-->>R: rewrite `x-amz-storage-class` -> `GLACIER` if resolved class == `STANDARD`
  else NSFS_GLACIER_FORCE_STORAGE_CLASS = false
    U-->>R: no change
  end
  R->>N: continue processing (populate_request_additional_info_or_redirect)
  N-->>C: response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • guymguym
  • dannyzaken
  • jackyalbo

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 25.00%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title clearly and concisely describes the main changes by indicating support for the new GLACIER_DA storage class and the forced GLACIER behavior, which matches the core objectives of this PR. It avoids ambiguous terms and uses precise language relevant to the changeset. The prefix “[NSFS | GLACIER]” contextualizes the subsystem without listing files or unrelated details.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment


@tangledbytes tangledbytes force-pushed the utkarsh/feat/glacier-da-storage-class branch from 1f45c29 to d8ab6ce Compare September 15, 2025 11:45

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/sdk/namespace_fs.js (4)

1076-1084: Reads from GLACIER_DA must require restore as well

Currently only GLACIER is blocked until restored; GLACIER_DA objects would be readable immediately. If GLACIER_DA represents Deep Archive, this violates expected semantics.

Apply:

-                    if (obj_storage_class === s3_utils.STORAGE_CLASS_GLACIER) {
+                    if (obj_storage_class === s3_utils.STORAGE_CLASS_GLACIER ||
+                        obj_storage_class === s3_utils.STORAGE_CLASS_GLACIER_DA) {
                         if (obj_restore_status?.ongoing || !obj_restore_status?.expiry_time) {
                             dbg.warn('read_object_stream: object is not restored yet', obj_restore_status);
                             throw new S3Error(S3Error.InvalidObjectState);
                         }
                     }

1374-1392: Copy semantics for GLACIER_DA

Server-side copy fallback and “must be restored before copy” checks apply only to GLACIER; extend to GLACIER_DA to avoid linking/copying archived content improperly.

Apply:

-            if (src_storage_class === s3_utils.STORAGE_CLASS_GLACIER) {
+            if (src_storage_class === s3_utils.STORAGE_CLASS_GLACIER ||
+                src_storage_class === s3_utils.STORAGE_CLASS_GLACIER_DA) {
                 if (src_restore_status?.ongoing || !src_restore_status?.expiry_time) {
                     dbg.warn('_validate_upload: object is not restored yet', src_restore_status);
                     throw new S3Error(S3Error.InvalidObjectState);
                 }
-
-                return true;
+                return true;
             }
@@
-        return params.copy_source && params.storage_class === s3_utils.STORAGE_CLASS_GLACIER;
+        return params.copy_source &&
+            [s3_utils.STORAGE_CLASS_GLACIER, s3_utils.STORAGE_CLASS_GLACIER_DA]
+                .includes(params.storage_class);

1433-1440: Migration WAL for GLACIER_DA

If GLACIER_DA requires migration, include it here; otherwise explain why it’s excluded.

Apply:

-            if (params.storage_class === s3_utils.STORAGE_CLASS_GLACIER) {
+            if (params.storage_class === s3_utils.STORAGE_CLASS_GLACIER ||
+                params.storage_class === s3_utils.STORAGE_CLASS_GLACIER_DA) {
                 await this.append_to_migrate_wal(file_path);
             }

1-1: Treat GLACIER_DA like GLACIER in all glacier-specific checks

Several places compare storage_class strictly to s3_utils.STORAGE_CLASS_GLACIER and will miss STORAGE_CLASS_GLACIER_DA; this causes incorrect restore/read/listing/expiry behavior. Fix by checking both constants or by adding/using a central is_glacier(storage_class) helper (a sketch follows the list below).

Locations to change:

  • src/sdk/glacier.js:314–317 — get_restore_status returns undefined unless STORAGE_CLASS_GLACIER.
  • src/sdk/namespace_fs.js:1076–1084 — read_object_stream gate uses obj_storage_class === s3_utils.STORAGE_CLASS_GLACIER.
  • src/sdk/namespace_fs.js:1376–1386 — copy-source validation uses src_storage_class === s3_utils.STORAGE_CLASS_GLACIER.
  • src/sdk/namespace_fs.js:2296–2304 — restore_object assumes undefined means "not glacier" (relies on get_restore_status).
  • src/sdk/namespace_fs.js:3690–3694 — _glacier_force_expire_on_get checks storage_class !== s3_utils.STORAGE_CLASS_GLACIER.
  • src/endpoint/s3/ops/s3_get_object.js:42–50 — S3 GET blocks reads only when object_md.storage_class === s3_utils.STORAGE_CLASS_GLACIER.
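
One possible shape for such a helper, sketched here for illustration (the PR itself keeps the list local to namespace_fs.js; the names below are assumptions, not code from this PR):

// In s3_utils.js: a single source of truth for archive-like classes.
const GLACIER_CLASSES = Object.freeze([
    STORAGE_CLASS_GLACIER,
    STORAGE_CLASS_GLACIER_DA,
    STORAGE_CLASS_GLACIER_IR,
]);

function is_glacier(storage_class) {
    return GLACIER_CLASSES.includes(storage_class);
}

// The call sites listed above would then read, e.g.:
// if (s3_utils.is_glacier(obj_storage_class)) { /* require restore before read/copy */ }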
🧹 Nitpick comments (5)
config.js (1)

199-206: Clarify force semantics and guard against misconfig

As written, NSFS_GLACIER_FORCE_STORAGE_CLASS can rewrite uploads to GLACIER even when NSFS_GLACIER_ENABLED is false, which would later be rejected by NamespaceFS._is_storage_class_supported. Please document that forcing requires NSFS_GLACIER_ENABLED=true, or gate the rewrite accordingly in the header-processing layer.

Would you like me to add a small guard in the header rewrite to check config.NSFS_GLACIER_ENABLED first and log a warning if forcing is set without enabling Glacier?
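
A minimal sketch of that guard, assuming dbg is the debug logger already imported in s3_utils.js (a suggestion only, not code from the PR):

function override_storage_class(req) {
    if (!config.NSFS_GLACIER_FORCE_STORAGE_CLASS) return;
    if (!config.NSFS_GLACIER_ENABLED) {
        // Forcing GLACIER while Glacier is disabled would only get the upload rejected
        // later by NamespaceFS._is_storage_class_supported, so warn and skip the rewrite.
        dbg.warn('NSFS_GLACIER_FORCE_STORAGE_CLASS is set but NSFS_GLACIER_ENABLED is false, skipping override');
        return;
    }
    if (parse_storage_class_header(req) === STORAGE_CLASS_STANDARD) {
        req.headers['x-amz-storage-class'] = STORAGE_CLASS_GLACIER;
    }
}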

src/endpoint/s3/s3_utils.js (1)

24-25: Define and export a single source of truth for Glacier-class set

Avoid scattering the set of Glacier-like classes across files. Export a frozen list here and reuse it elsewhere (e.g., namespace_fs).

Apply:

 /** @type {nb.StorageClass} */
 const STORAGE_CLASS_GLACIER_IR = 'GLACIER_IR'; // "S3 Glacier Instant Retrieval"
 /** @type {nb.StorageClass} */
 const STORAGE_CLASS_GLACIER_DA = 'GLACIER_DA'; // "DBS3 specific Storage Class"
+
+// Classes that represent cold/archive semantics in NSFS
+const GLACIER_CLASSES = Object.freeze([
+  STORAGE_CLASS_GLACIER,
+  STORAGE_CLASS_GLACIER_DA,
+  STORAGE_CLASS_GLACIER_IR,
+]);

And simplify parse:

 function parse_storage_class(storage_class) {
   if (!storage_class) return STORAGE_CLASS_STANDARD;
   if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
-  if (storage_class === STORAGE_CLASS_GLACIER) return STORAGE_CLASS_GLACIER;
-  if (storage_class === STORAGE_CLASS_GLACIER_DA) return STORAGE_CLASS_GLACIER_DA;
-  if (storage_class === STORAGE_CLASS_GLACIER_IR) return STORAGE_CLASS_GLACIER_IR;
+  if (GLACIER_CLASSES.includes(storage_class)) return storage_class;
   throw new Error(`No such s3 storage class ${storage_class}`);
 }

Export it:

 exports.STORAGE_CLASS_GLACIER_IR = STORAGE_CLASS_GLACIER_IR;
 exports.STORAGE_CLASS_GLACIER_DA = STORAGE_CLASS_GLACIER_DA;
+exports.GLACIER_CLASSES = GLACIER_CLASSES;
src/sdk/namespace_fs.js (3)

3613-3623: Support matrix: force vs enable mismatch

Good to include GLACIER_DA and GLACIER_IR in the supported set. One caveat: when NSFS_GLACIER_FORCE_STORAGE_CLASS=true (header rewrite to GLACIER), but NSFS_GLACIER_ENABLED=false, uploads will be rejected here. Either document that forcing implies enabling or allow forcing to bypass this check for GLACIER (and GLACIER_DA) uploads.

I can wire a guard in the header rewrite path or adjust this check; confirm the intended behavior for combinations (3) and (4) in the PR description.


3692-3699: Force-expire-on-get: consider GLACIER_DA

If deep-archive should also be expired-on-get, extend the check; if not, please document the difference.

Apply:

-        const storage_class = s3_utils.parse_storage_class(stat.xattr[Glacier.STORAGE_CLASS_XATTR]);
-        if (storage_class !== s3_utils.STORAGE_CLASS_GLACIER) return;
+        const storage_class = s3_utils.parse_storage_class(stat.xattr[Glacier.STORAGE_CLASS_XATTR]);
+        if (![s3_utils.STORAGE_CLASS_GLACIER, s3_utils.STORAGE_CLASS_GLACIER_DA].includes(storage_class)) return;

3613-3623: Deduplicate Glacier-class logic by reusing s3_utils

Prefer importing a shared set (e.g., s3_utils.GLACIER_CLASSES) instead of re-declaring glacier_storage_classes locally.

Apply:

-        const glacier_storage_classes = [
-            s3_utils.STORAGE_CLASS_GLACIER,
-            s3_utils.STORAGE_CLASS_GLACIER_DA,
-            s3_utils.STORAGE_CLASS_GLACIER_IR,
-        ];
+        const glacier_storage_classes = s3_utils.GLACIER_CLASSES;
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 29d20c7 and d8ab6ce.

📒 Files selected for processing (5)
  • config.js (1 hunks)
  • src/endpoint/s3/s3_utils.js (3 hunks)
  • src/sdk/namespace_fs.js (1 hunks)
  • src/sdk/nb.d.ts (1 hunks)
  • src/util/http_utils.js (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/endpoint/s3/s3_utils.js (2)
src/endpoint/s3/ops/s3_put_object.js (1)
  • storage_class (23-23)
src/endpoint/s3/ops/s3_post_object_uploads.js (1)
  • storage_class (16-16)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (1)
src/sdk/nb.d.ts (1)

20-21: Type extension looks good

Adding 'GLACIER_DA' to StorageClass aligns the types with the new class.

@tangledbytes tangledbytes force-pushed the utkarsh/feat/glacier-da-storage-class branch from d8ab6ce to fcd6987 Compare September 15, 2025 12:56

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/endpoint/s3/s3_utils.js (2)

383-390: Map invalid storage class to S3Error instead of generic Error

Throwing a generic Error likely surfaces as 500. Return a 400-class S3Error.

-    throw new Error(`No such s3 storage class ${storage_class}`);
+    throw new S3Error({ ...S3Error.InvalidArgument, message: `Unsupported storage class ${storage_class}` });

825-833: Scope the override to upload ops to avoid surprising mutations

Limit override to put_object and create MPU; keep behavior centralized and future‑proof.

-function override_storage_class(req) {
-    if (
-        config.NSFS_GLACIER_FORCE_STORAGE_CLASS &&
-        parse_storage_class_header(req) === STORAGE_CLASS_STANDARD
-    ) {
-        req.headers['x-amz-storage-class'] = STORAGE_CLASS_GLACIER;
-    }
-}
+function override_storage_class(req) {
+    if (!config.NSFS_GLACIER_FORCE_STORAGE_CLASS) return;
+    // Only for ops where storage class is meaningful
+    if (!(req.op_name === 'put_object' || req.op_name === 'post_object_uploads')) return;
+    if (parse_storage_class_header(req) === STORAGE_CLASS_STANDARD) {
+        req.headers['x-amz-storage-class'] = STORAGE_CLASS_GLACIER;
+    }
+}
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d8ab6ce and fcd6987.

📒 Files selected for processing (5)
  • config.js (1 hunks)
  • src/endpoint/s3/s3_rest.js (1 hunks)
  • src/endpoint/s3/s3_utils.js (4 hunks)
  • src/sdk/namespace_fs.js (1 hunks)
  • src/sdk/nb.d.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • config.js
  • src/sdk/nb.d.ts
  • src/sdk/namespace_fs.js
🧰 Additional context used
🧬 Code graph analysis (1)
src/endpoint/s3/s3_utils.js (2)
src/endpoint/s3/ops/s3_put_object.js (1)
  • storage_class (23-23)
src/endpoint/s3/ops/s3_post_object_uploads.js (1)
  • storage_class (16-16)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-package-lock-validation
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)
src/endpoint/s3/s3_utils.js (3)

23-25: Add GLACIER_DA constant — OK

Constant addition and typing look good.


837-838: Export GLACIER_DA — OK

Export aligns with added constant and parsing.


879-879: Export override_storage_class — OK

Makes the helper available to s3_rest flow.

Comment on lines +110 to +112
// Will override the storage class if configured
s3_utils.override_storage_class(req);


⚠️ Potential issue

Do not mutate headers before SigV4 auth; move override after authenticate_request()

Changing x-amz-storage-class pre-auth can invalidate AWS SigV4 when that header is signed. Apply override after authenticate_request(req) and before authorize_request(req); also limit to relevant ops.

Apply this diff:

-    // Will override the storage class if configured
-    s3_utils.override_storage_class(req);
+    // (moved below) override storage class only after signature auth

Place this block immediately after authenticate_request(req);:

+    // Safe to mutate after signature auth and before policy checks.
+    // Limit to ops that accept x-amz-storage-class (PUT object, MPU create).
+    if (req.op_name === 'put_object' || req.op_name === 'post_object_uploads') {
+        s3_utils.override_storage_class(req);
+    }
🤖 Prompt for AI Agents
In src/endpoint/s3/s3_rest.js around lines 110-112, the code mutates x-amz-*
headers before SigV4 authentication by calling
s3_utils.override_storage_class(req); move this call so it runs immediately
after authenticate_request(req) and before authorize_request(req) to avoid
invalidating signed headers, and restrict the override to only the S3 operations
that accept a storage-class header (e.g., PUT object/ multipart upload
completes) rather than running for all requests.

@tangledbytes tangledbytes requested review from a team and aayushchouhan09 and removed request for a team September 15, 2025 14:03
@tangledbytes tangledbytes force-pushed the utkarsh/feat/glacier-da-storage-class branch from fcd6987 to 1ddb315 Compare September 26, 2025 12:16
@tangledbytes tangledbytes force-pushed the utkarsh/feat/glacier-da-storage-class branch from 1ddb315 to 65f77f1 Compare October 8, 2025 11:23
const glacier_storage_classes = [
    s3_utils.STORAGE_CLASS_GLACIER,
    s3_utils.STORAGE_CLASS_GLACIER_DA,
    s3_utils.STORAGE_CLASS_GLACIER_IR,
Contributor


STORAGE_CLASS_GLACIER_DA is a newly added storage class, and STORAGE_CLASS_GLACIER_IR was missing from the previous check? Is there any reason for adding it now?

Member Author


Yep, these are newly added. GLACIER_DA is DBS3-specific, and they want to support this one as well. On the other hand, our implementation is quite close to GLACIER_IR (instant retrieval), because DBS2 usually triggers the retrieval within 10 minutes and the restores usually finish within 20 minutes of submitting the request, unlike AWS S3 which can take a couple of hours.

@tangledbytes tangledbytes force-pushed the utkarsh/feat/glacier-da-storage-class branch from 65f77f1 to 1612a63 Compare October 13, 2025 13:10

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/endpoint/s3/s3_utils.js (1)

23-24: Clarify the comment for STORAGE_CLASS_GLACIER_DA.

The comment "DBS3 specific Storage Class" is unclear. Consider updating it to be more descriptive, similar to the other storage class constants. For example:

 /** @type {nb.StorageClass} */
-const STORAGE_CLASS_GLACIER_DA = 'GLACIER_DA'; // "DBS3 specific Storage Class"
+const STORAGE_CLASS_GLACIER_DA = 'GLACIER_DA'; // "S3 Glacier Deep Archive"
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65f77f1 and 1612a63.

📒 Files selected for processing (5)
  • config.js (1 hunks)
  • src/endpoint/s3/s3_rest.js (1 hunks)
  • src/endpoint/s3/s3_utils.js (4 hunks)
  • src/sdk/namespace_fs.js (1 hunks)
  • src/sdk/nb.d.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • src/endpoint/s3/s3_rest.js
  • src/sdk/namespace_fs.js
  • config.js
  • src/sdk/nb.d.ts
🧰 Additional context used
🧬 Code graph analysis (1)
src/endpoint/s3/s3_utils.js (3)
src/endpoint/s3/ops/s3_put_object.js (1)
  • storage_class (23-23)
src/endpoint/s3/ops/s3_post_object_uploads.js (1)
  • storage_class (16-16)
config.js (1)
  • config (7-7)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: run-package-lock-validation
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (2)
src/endpoint/s3/s3_utils.js (2)

387-387: LGTM!

The extension to recognize STORAGE_CLASS_GLACIER_DA is correctly implemented and follows the existing pattern for other storage classes.


837-837: override_storage_class invocation order verified
override_storage_class(req) at line 111 in s3_rest.js runs before any parse_storage_class_header usage in s3_put_object.js and s3_post_object_uploads.js. No further changes required.

Comment on lines +825 to +832
function override_storage_class(req) {
    if (
        config.NSFS_GLACIER_FORCE_STORAGE_CLASS &&
        parse_storage_class_header(req) === STORAGE_CLASS_STANDARD
    ) {
        req.headers['x-amz-storage-class'] = STORAGE_CLASS_GLACIER;
    }
}

🛠️ Refactor suggestion | 🟠 Major

Add JSDoc documentation for side effects and behavior.

The override_storage_class function mutates the request object, which is a side effect that should be documented. Additionally, the function lacks documentation explaining its purpose and when it should be called.

Apply this diff to add JSDoc documentation:

+/**
+ * override_storage_class rewrites the storage class header to GLACIER when
+ * NSFS_GLACIER_FORCE_STORAGE_CLASS is enabled and the incoming storage class
+ * is STANDARD (or not specified, which defaults to STANDARD).
+ * 
+ * This function mutates req.headers as a side effect.
+ * 
+ * @param {nb.S3Request} req - The S3 request object
+ */
 function override_storage_class(req) {
     if (
         config.NSFS_GLACIER_FORCE_STORAGE_CLASS &&
         parse_storage_class_header(req) === STORAGE_CLASS_STANDARD
     ) {
         req.headers['x-amz-storage-class'] = STORAGE_CLASS_GLACIER;
     }
 }
🤖 Prompt for AI Agents
In src/endpoint/s3/s3_utils.js around lines 825 to 832, the function
override_storage_class lacks JSDoc explaining its purpose and side effects: add
a JSDoc block immediately above the function that describes what the function
does (overrides x-amz-storage-class to GLACIER when
NSFS_GLACIER_FORCE_STORAGE_CLASS is set and the parsed header equals STANDARD),
documents the parameter (req: incoming request object with headers), notes that
it mutates req.headers (side effect), states there is no return value, and
indicates when it should be called (e.g., before request is forwarded/sent).
Ensure the doc mentions edge cases (only runs when config flag true) and
optional behavior for clarity.

@tangledbytes tangledbytes merged commit 2367740 into noobaa:master Oct 13, 2025
18 checks passed
